Results 1 - 7 of 7
1.
PLoS One ; 18(10): e0286878, 2023.
Article in English | MEDLINE | ID: mdl-37878605

ABSTRACT

Orthogonal polynomials and their moments play a significant role in image processing and computer vision. One such family is the discrete Hahn polynomials (DHaPs), which are used for compression and feature extraction; however, at high moment orders they suffer from numerical instability. This paper proposes a fast approach for computing high-order DHaPs. The work uses multithreading for the calculation of the Hahn polynomial coefficients: independent calculations are divided among threads to exploit the available processing capabilities, and a distribution method is provided to achieve a more balanced processing burden among the threads. The proposed methods are tested for various DHaP parameter values, sizes, and thread counts. Compared with the unthreaded case, the results demonstrate an improvement in processing time that increases with the polynomial size, reaching a maximum speedup of 5.8 for a polynomial size and order of 8000 × 8000 (matrix size). Furthermore, simply raising the number of threads does not consistently improve performance; beyond some point the improvement falls below the maximum. The number of threads that achieves the highest improvement depends on the size: 8 to 16 threads for a 1000 × 1000 matrix, versus 32 to 160 threads for the 8000 × 8000 case.
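The balanced distribution of independent coefficient rows among threads described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the helper names and the near-equal contiguous chunking are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def balanced_partition(n_rows, n_threads):
    """Split row indices 0..n_rows-1 into contiguous chunks whose sizes
    differ by at most one, so each thread gets a near-equal share of the
    independent row computations."""
    base, extra = divmod(n_rows, n_threads)
    chunks, start = [], 0
    for t in range(n_threads):
        size = base + (1 if t < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

def compute_rows(n_rows, row_fn, n_threads=4):
    """Compute row_fn(r) for every row r in parallel; rows of the
    coefficient matrix are independent, so no locking is needed."""
    result = [None] * n_rows
    def worker(rows):
        for r in rows:
            result[r] = row_fn(r)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(worker, balanced_partition(n_rows, n_threads)))
    return result
```

In CPython the gain from threads is limited by the interpreter lock unless the per-row work releases it (e.g. NumPy kernels); the distribution logic itself is language-agnostic.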


Subjects
Algorithms ; Data Compression ; Software ; Image Processing, Computer-Assisted ; Image Enhancement/methods
2.
Sensors (Basel) ; 22(23)2022 Nov 26.
Article in English | MEDLINE | ID: mdl-36501912

ABSTRACT

Three-dimensional (3D) image and medical image processing, both forms of big-data analysis, have attracted significant attention in recent years, and efficient 3D object recognition techniques would benefit both. To date, however, most proposed 3D object recognition methods face a major challenge: high computational complexity, since complexity and execution time grow with the dimensions of the object. Finding a method that obtains high recognition accuracy at low computational complexity is therefore essential, and this paper presents one. Specifically, the proposed method uses a fast overlapped block-processing technique that handles higher-order polynomials and high-dimensional objects, reducing the computational complexity of feature extraction. It also exploits Charlier polynomials and their moments together with a support vector machine (SVM). The method is evaluated on the well-known McGill benchmark dataset and compared with existing 3D object recognition methods. The results show that the proposed approach achieves high recognition rates under different noisy environments, has the potential to mitigate noise distortion, and outperforms existing methods in computation time under both noise-free and noisy conditions.
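The overlapped block-processing idea can be sketched as follows: a 3D volume is tiled into sub-cubes whose stride is smaller than the block size, so consecutive blocks share voxels. The function names and the uniform cubic blocks are illustrative assumptions, not the paper's exact algorithm.

```python
from itertools import product

def block_starts(length, block, overlap):
    """Start indices along one axis: consecutive blocks share `overlap`
    samples, i.e. the stride is block - overlap."""
    step = block - overlap
    return range(0, length - block + 1, step)

def overlapped_blocks_3d(shape, block, overlap):
    """All (i, j, k) corner coordinates of overlapping sub-cubes of a 3D
    volume with the given shape; features (e.g. Charlier moments) would
    then be computed per block."""
    return list(product(*(block_starts(n, block, overlap) for n in shape)))
```

For an 8 × 8 × 8 volume with 4-voxel blocks overlapping by 2, each axis yields starts 0, 2, 4, giving 27 blocks in total.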


Subjects
Algorithms ; Pattern Recognition, Automated ; Pattern Recognition, Automated/methods ; Image Processing, Computer-Assisted/methods ; Support Vector Machine ; Visual Perception
3.
PeerJ Comput Sci ; 8: e1017, 2022.
Article in English | MEDLINE | ID: mdl-35875642

ABSTRACT

Massive multiple-input multiple-output (massive MIMO) is considered the key technology for meeting the huge data-rate demands of future wireless communication networks. However, for massive-MIMO systems to realize their maximum potential gain, sufficiently accurate downlink (DL) channel state information (CSI) must be acquired with low overhead to fit within the short coherence time (CT). This article therefore addresses the technical challenge of DL CSI estimation in frequency-division-duplex (FDD) massive MIMO with short CT, considering five different physical correlation models. The statistical structure of the massive-MIMO channel, captured by the physical correlation, is exploited to obtain sufficiently accurate DL CSI estimates. Specifically, to reduce the DL CSI estimation overhead, the training sequence is designed from the eigenvectors of the transmit correlation matrix. The achievable sum rate (ASR) maximization and the mean square error (MSE) of CSI estimation under short CT are then investigated using the proposed training sequence design. The article also examines the effect of channel hardening in an FDD massive-MIMO system. The results demonstrate that high-correlation scenarios incur a large loss in channel hardening, and that increasing the correlation level reduces the MSE but does not increase the ASR. Nevertheless, exploiting the spatial correlation structure remains essential for FDD massive-MIMO systems under limited CT, a finding that holds for all the physical correlation models considered.
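The core of the training-sequence design — pilot directions taken from the dominant eigenvectors of the transmit correlation matrix — can be sketched as below. The exponential correlation model and all function names are illustrative assumptions; the paper considers five physical correlation models.

```python
import numpy as np

def exp_correlation(n_antennas, rho):
    """Exponential spatial correlation model: R[i, j] = rho**|i - j|
    (used here only as one illustrative physical correlation model)."""
    idx = np.arange(n_antennas)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def training_directions(R, n_pilots):
    """Pilot (training) directions chosen as the eigenvectors of the
    transmit correlation matrix with the largest eigenvalues, so a short
    training sequence covers the dominant channel subspace."""
    eigvals, eigvecs = np.linalg.eigh(R)        # ascending eigenvalues
    strongest = np.argsort(eigvals)[::-1][:n_pilots]
    return eigvecs[:, strongest]
```

With high correlation (rho close to 1) the channel energy concentrates in few eigen-directions, which is why a training sequence shorter than the antenna count can still capture most of the CSI.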

4.
Entropy (Basel) ; 23(9)2021 Sep 03.
Article in English | MEDLINE | ID: mdl-34573787

ABSTRACT

Krawtchouk polynomials (KPs) and their moments are promising tools for information theory, coding theory, and signal processing, owing to their special capabilities in feature extraction and classification. The main challenge in existing KP recurrence algorithms is numerical error, which occurs when computing the coefficients at large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing high-order KP coefficients. In particular, it develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. A new diagonal recurrence relation, derived from the existing n-direction and x-direction recurrence algorithms, is introduced and used in the proposed algorithm. The diagonal and existing recurrences are then exploited together to compute the KP coefficients: the coefficients are first computed for one partition after dividing the KP plane into four, and symmetry relations are exploited to obtain the coefficients in the other partitions. The proposed algorithm is evaluated against state-of-the-art work in terms of reconstruction error, polynomial size, and computation cost. The results indicate that it is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and polynomial size N, with an improvement ratio in the number of computed coefficients ranging from 18.64% to 81.55%. Moreover, the proposed algorithm can generate polynomials of an order roughly 8.5 times larger than those generated by state-of-the-art algorithms.
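For context, the classical three-term recurrence in the n direction — one of the existing recurrences the diagonal scheme builds on — can be sketched as follows. This is the textbook recurrence for K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), not the paper's proposed diagonal relation.

```python
def krawtchouk_column(x, p, N, max_n):
    """Values K_0(x), ..., K_{max_n}(x) of the Krawtchouk polynomials
    K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), generated with the classical
    n-direction three-term recurrence:
      p(N-n) K_{n+1} = (p(N-n) + n(1-p) - x) K_n - n(1-p) K_{n-1},
    starting from K_0 = 1 and K_1(x) = 1 - x/(pN)."""
    K = [1.0, 1.0 - x / (p * N)]
    for n in range(1, max_n):
        nxt = ((p * (N - n) + n * (1 - p) - x) * K[n]
               - n * (1 - p) * K[n - 1]) / (p * (N - n))
        K.append(nxt)
    return K[:max_n + 1]
```

It is exactly this kind of repeated division and subtraction that accumulates numerical error at large N and skewed p, which motivates the paper's alternative recurrence path.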

5.
Sensors (Basel) ; 21(6)2021 Mar 12.
Article in English | MEDLINE | ID: mdl-33808986

ABSTRACT

Numeral recognition is an essential preliminary step for optical character recognition, document understanding, and related tasks. Although several handwritten numeral recognition algorithms have been proposed, achieving adequate recognition accuracy and execution time remains challenging, and recognition accuracy depends heavily on the feature-extraction mechanism. A fast and robust numeral recognition method is therefore needed that extracts features efficiently while maintaining a fast implementation time. Furthermore, most existing studies evaluate their methods only in clean environments, limiting understanding of how they would perform in more realistic noisy environments. To this end, this paper proposes a new scheme for handwritten numeral recognition using hybrid orthogonal polynomials. Gradient and smoothed features are extracted using the hybrid orthogonal polynomial, and the embedded image kernel technique is adopted to reduce the complexity of feature extraction. A support vector machine then classifies the extracted features into the different numerals. The proposed scheme is evaluated on three numeral recognition datasets: Roman, Arabic, and Devanagari. Its accuracy is compared with that of state-of-the-art recognition methods, including an up-to-date convolutional neural network. The results show that the proposed method achieves almost the highest recognition accuracy among the compared methods in all the scenarios considered. Importantly, the results demonstrate that the proposed method is robust against noise distortion and outperforms the convolutional neural network considerably, which signifies its feasibility and effectiveness in both clean and more realistic noisy environments.
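The embedded image kernel idea — folding the filtering step into the moment transform so the gradient or smoothed features come from a single pair of matrix products — can be sketched as below. The names `poly` (hybrid orthogonal polynomial matrix) and `kernel` (a filter in matrix form) are illustrative assumptions.

```python
import numpy as np

def embedded_kernel_moments(image, poly, kernel):
    """Moments of the filtered image in one pass: pre-multiplying the
    polynomial matrix by the filter matrix ("embedding the kernel")
    gives T @ image @ T.T == poly @ (kernel @ image @ kernel.T) @ poly.T,
    so each image no longer needs a separate filtering step."""
    T = poly @ kernel   # in practice T is computed once and reused
    return T @ image @ T.T
```

The saving comes from reuse: T is precomputed once, after which every image costs the same as a plain moment computation.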

6.
J Imaging ; 6(8)2020 Aug 13.
Article in English | MEDLINE | ID: mdl-34460696

ABSTRACT

Discrete Krawtchouk polynomials are widely utilized in different fields for their remarkable characteristics, notably the localization property, and discrete orthogonal moments serve as feature descriptors for images and video frames in computer vision applications. This paper presents a new method for computing discrete Krawtchouk polynomial coefficients swiftly and efficiently. The method proposes a new initial value that does not tend to zero as the polynomial size increases, together with a combination of the existing recurrence relations in the n and x directions, developed to reduce the computational cost. The proposed method computes approximately 12.5% of the polynomial coefficients directly; symmetry relations are then employed to obtain the rest. The method is evaluated against existing methods in terms of computational cost and the maximum size that can be generated, and an image reconstruction error analysis is performed for large signal sizes. The evaluation shows that the proposed method outperforms the existing methods.
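One symmetry that lets coefficients be mirrored rather than recomputed is the n ↔ x duality of K_n(x; p, N) = 2F1(-n, -x; -N; 1/p), which is symmetric in its first two arguments. The sketch below computes only the half of the coefficient plane with x ≥ n and mirrors the rest; the paper's scheme combines further symmetries to reach roughly 12.5%, and the helper names here are illustrative.

```python
def krawtchouk(n, x, p, N):
    """K_n(x; p, N) = 2F1(-n, -x; -N; 1/p) as a finite sum; the series
    terminates after min(n, x) + 1 terms."""
    total, term = 0.0, 1.0
    for k in range(min(n, x) + 1):
        total += term
        term *= (k - n) * (k - x) / ((k - N) * (k + 1) * p)
    return total

def coefficient_matrix(p, N, size):
    """Evaluate only the entries with x >= n directly, then mirror the
    remaining entries via the symmetry K_n(x) = K_x(n)."""
    K = [[0.0] * size for _ in range(size)]
    for n in range(size):
        for x in range(n, size):
            K[n][x] = krawtchouk(n, x, p, N)
            K[x][n] = K[n][x]   # mirrored, not recomputed
    return K
```

Halving (or, with more symmetries, eighth-ing) the directly computed entries reduces both runtime and the opportunity for numerical error to accumulate.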

7.
Entropy (Basel) ; 20(4)2018 Mar 23.
Article in English | MEDLINE | ID: mdl-33265305

ABSTRACT

The recent increase in the number of videos available in cyberspace is due to the availability of multimedia devices, highly developed communication technologies, and low-cost storage. These videos are typically stored in databases with only text annotation, which makes content-based video browsing and retrieval inefficient. Video databases are large and contain voluminous information, characteristics that underline the need for automated video structure analysis. Shot boundary detection (SBD) is a substantial part of video browsing and retrieval: it aims to detect transitions and their boundaries between consecutive shots, so that information-rich shots can be used in content-based video indexing and retrieval. This paper reviews an extensive set of SBD approaches and their development, comprehensively exploring the advantages and disadvantages of each approach, discussing the developed algorithms, and presenting challenges and recommendations.
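As a concrete illustration of the pairwise-comparison family such surveys cover, a minimal histogram-difference cut detector might look like this. The bin count, threshold, and flat-list frame format are illustrative assumptions, not a method from the review.

```python
def grey_histogram(frame, bins=8, vmax=256):
    """Coarse grey-level histogram of a frame given as a flat list of
    pixel intensities in [0, vmax)."""
    h = [0] * bins
    for v in frame:
        h[v * bins // vmax] += 1
    return h

def detect_cuts(frames, threshold):
    """Declare an abrupt transition (cut) between frames i-1 and i when
    the absolute histogram difference exceeds the threshold."""
    cuts, prev = [], grey_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = grey_histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts
```

Such threshold-based schemes detect hard cuts well but struggle with gradual transitions (fades, dissolves, wipes), which is one of the main axes along which SBD approaches are compared.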
